identity attack


Automated Adversarial Discovery for Safety Classifiers

Lal, Yash Kumar, Lahoti, Preethi, Sinha, Aradhana, Qin, Yao, Balashankar, Ananth

arXiv.org Artificial Intelligence

Safety classifiers are critical for mitigating toxicity on online forums such as social media and in chatbots, yet they remain vulnerable to emergent, and often innumerable, adversarial attacks. Traditional automated adversarial data generation methods, however, tend to produce attacks that are not diverse but are variations of previously observed harm types. We formalize the task of automated adversarial discovery for safety classifiers: finding new attacks along previously unseen harm dimensions that expose new weaknesses in the classifier. We measure progress on this task along two key axes: (1) adversarial success: does the attack fool the classifier? and (2) dimensional diversity: does the attack represent a previously unseen harm type? Our evaluation of existing attack generation methods on the CivilComments toxicity task reveals their limitations: word perturbation attacks fail to fool classifiers, while prompt-based LLM attacks have more adversarial success but lack dimensional diversity. Even our best-performing prompt-based method finds new successful attacks on unseen harm dimensions only 5% of the time. Automatically finding new harmful dimensions of attack is crucial, and there is substantial headroom for future research on our new task.
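
As a rough illustration of the two evaluation axes above, here is a minimal Python sketch. The callables `classifier` (returning a toxicity probability) and `harm_dimension` (returning a harm-type label), as well as the decision threshold, are hypothetical stand-ins, not the paper's implementation.

```python
def evaluate_attack(attack_text, classifier, harm_dimension, seen_dimensions,
                    threshold=0.5):
    """Score one candidate attack on the paper's two axes."""
    # Axis 1: adversarial success -- the harmful text slips under the
    # classifier's decision threshold.
    fools_classifier = classifier(attack_text) < threshold

    # Axis 2: dimensional diversity -- the attack's harm type was not
    # observed among previously collected attacks.
    dimension = harm_dimension(attack_text)
    is_new_dimension = dimension not in seen_dimensions

    return fools_classifier, is_new_dimension, dimension


def discovery_rate(attacks, classifier, harm_dimension):
    """Fraction of attacks that are both successful and dimensionally new."""
    seen, hits = set(), 0
    for text in attacks:
        success, new, dim = evaluate_attack(text, classifier,
                                            harm_dimension, seen)
        if success and new:
            hits += 1
        seen.add(dim)
    return hits / max(len(attacks), 1)
```

Under this framing, the paper's headline result corresponds to a discovery rate of roughly 0.05 for its best prompt-based generator.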


RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?

de Wynter, Adrian, Watts, Ishaan, Altıntoprak, Nektar Ege, Wongsangaroonsri, Tua, Zhang, Minghui, Farra, Noura, Baur, Lena, Claudet, Samantha, Gajdusek, Pavel, Gören, Can, Gu, Qilong, Kaminska, Anna, Kaminski, Tomasz, Kuo, Ruby, Kyuba, Akiko, Lee, Jongho, Mathur, Kartik, Merok, Petter, Milovanović, Ivana, Paananen, Nani, Paananen, Vesa-Matti, Pavlenko, Anna, Vidal, Bruno Pereira, Strika, Luciano, Tsao, Yueh, Turcato, Davide, Vakhno, Oleksandr, Velcsov, Judit, Vickers, Anna, Visser, Stéphanie, Widarmanto, Herdyan, Zaikin, Andrey, Chen, Si-Qing

arXiv.org Artificial Intelligence

Large language models (LLMs) and small language models (SLMs) are being adopted at remarkable speed, although their safety remains a serious concern. With the advent of multilingual S/LLMs, the question becomes a matter of scale: can we expand multilingual safety evaluations of these models with the same velocity at which they are deployed? To this end we introduce RTP-LX, a human-transcreated and human-annotated corpus of toxic prompts and outputs in 28 languages. RTP-LX follows participatory design practices, and a portion of the corpus is especially designed to detect culturally-specific toxic language. We evaluate seven S/LLMs on their ability to detect toxic content in a culturally-sensitive, multilingual scenario. We find that, although they typically score acceptably in terms of accuracy, they have low agreement with human judges when holistically judging the toxicity of a prompt, and have difficulty discerning harm in context-dependent scenarios, particularly with subtle-yet-harmful content (e.g. microaggressions, bias). We release this dataset to help further reduce harmful uses of these models and improve their safe deployment.
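
The accuracy-versus-agreement gap described above can be made concrete with chance-corrected agreement statistics. The sketch below assumes per-prompt toxicity labels on a shared ordinal scale; the label values are illustrative placeholders, not the RTP-LX schema.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

human_labels = [0, 2, 1, 0, 2, 1, 0, 1]   # human-annotated toxicity (0-2)
model_labels = [0, 1, 1, 0, 1, 0, 0, 1]   # S/LLM-judged toxicity (0-2)

# Raw accuracy can look acceptable even when the model systematically
# under-rates subtle harms such as microaggressions.
print("accuracy:", accuracy_score(human_labels, model_labels))

# Cohen's kappa corrects for chance agreement and exposes the gap
# between model judgments and human judges.
print("kappa:   ", cohen_kappa_score(human_labels, model_labels))
```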


Machine Learning Technique to Detect Sybil Attack on IoT Based Sensor Network

#artificialintelligence

Internet of Things (IoT) devices are seeing ever-increasing usage as wireless sensor networks develop. The heterogeneous network formed by the interconnection of all these IoT devices is highly vulnerable to external attacks. Many routing protocol attacks have been put forward, and the attacks continue to increase and diversify daily. However, the proposed detection and prevention methods need to be improved and updated to today's conditions. False identity (Sybil) attacks target the IoT network-layer routing protocol, RPL (Routing Protocol for Low-Power and Lossy Networks).
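
A typical machine-learning approach to this problem trains a classifier on per-node traffic statistics. The following is a minimal sketch under assumed features; the feature names and the randomly generated data are illustrative placeholders, not the article's dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical RPL features per node: DIO message rate, rank-change
# frequency, distinct IDs advertised per radio source, delivery ratio.
X = rng.random((1000, 4))
# Stand-in label: many identities per physical source suggests Sybil.
y = (X[:, 2] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "sybil"]))
```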


Differential Anomaly Detection for Facial Images

Ibsen, Mathias, González-Soler, Lázaro J., Rathgeb, Christian, Drozdowski, Pawel, Gomez-Barrero, Marta, Busch, Christoph

arXiv.org Artificial Intelligence

Due to their convenience and high accuracy, face recognition systems are widely employed in governmental and personal security applications to automatically recognise individuals. Despite recent advances, face recognition systems have been shown to be particularly vulnerable to identity attacks (i.e., digital manipulations and attack presentations). Identity attacks pose a major security threat, as they can be used to gain unauthorised access and to spread misinformation. In this context, most algorithms for detecting identity attacks generalise poorly to attack types that are unknown at training time. To tackle this problem, we introduce a differential anomaly detection framework in which deep face embeddings are first extracted from pairs of images (i.e., reference and probe) and then combined for identity attack detection. The experimental evaluation conducted over several databases shows a high generalisation capability of the proposed method for detecting unknown attacks in both the digital and physical domains.
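
The differential idea above (combine reference and probe embeddings, then model only bona fide pairs so unseen attack types surface as anomalies) can be sketched as follows. The random vectors stand in for deep face embeddings, and the combination operator (elementwise difference plus product) is one plausible choice, not necessarily the paper's exact operator.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def combine(ref, probe):
    # Differential representation of a (reference, probe) pair.
    return np.concatenate([ref - probe, ref * probe], axis=-1)

# Bona fide pairs: the probe is a slightly noisy view of the same identity.
ref = rng.normal(size=(500, 128))
bona_fide = combine(ref, ref + 0.05 * rng.normal(size=ref.shape))

# Train only on bona fide pairs; no attack examples are needed, which is
# what lets the detector generalise to attack types unseen at training time.
detector = OneClassSVM(nu=0.05, gamma="scale").fit(bona_fide)

# An identity attack (e.g., a morph drifting toward another identity)
# should fall outside the bona fide region and be scored as an outlier (-1).
attack_pair = combine(ref[:1], ref[:1] + rng.normal(size=(1, 128)))
print("attack flagged:", detector.predict(attack_pair)[0] == -1)
```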